Explosive effects inside a bermed surface-level concrete structure.

FALL 2004


from the director...

As I was reading through the galley proofs of this edition of the Resource, I was struck by how much every edition of this newsletter looks in many directions at once.

First and foremost, the Resource provides the vital information, tips, and techniques that you need to get your job done more efficiently and effectively using the resources of the Engineer Research and Development Center Major Shared Resource Center (ERDC MSRC). In that respect, every newsletter is very much about giving you information you can use in the here and now.

But behind the articles that focus on the here and now, there is another part of the Resource that looks to the future, focusing on how we are bringing the enormous talent of the ERDC MSRC to bear on the most important problems you will face as we move forward together in the years ahead.

This edition is no different. Inside its covers you’ll find articles that focus on tools you can use now, like the parallel particle-tracking and code-parallelization libraries developed by our in-house computational engineering staff and the work that the queue policy working group is doing to make sure your jobs run as fast as possible. You’ll also find articles that focus on both now and the future, such as those on our recent network and facility infrastructure upgrades that will ensure you have uninterrupted access to the best computational resources the world has to offer. Rounding off this issue, as with every issue, are the articles that focus on the work we are doing together to lay the foundation for science, engineering, and technology in this country, such as our student programs and the exciting work we are doing to bring solutionHPC to reality.

I hope you are as excited by what we are doing - and what we will be doing together in the future - as I am. Our biggest task is to balance the future against the present: we have to prepare for tomorrow while ensuring that the DoD can accomplish its mission today. If you'd like to let me know how you think we’re doing, drop me a line at john.e.west@erdc.usace.army.mil. I’d love to hear from you.

John West
Director, ERDC MSRC

Front Cover:

A high performance computing SHAMRC (Second-Order Hydrodynamic Automatic Mesh Refinement Code) simulation showing an MK-84 weapon delivered to a bermed surface-level concrete structure.

Back Cover:

solutionHPC - Making high performance computing easy for even the novice user. (See article, page 6). Cover designs by the ERDC MSRC Scientific Visualization Center.

Contents

Parallelization of the WASH123D Code Phase II: Coupled Two-Dimensional Overland and Three-Dimensional Subsurface Flows
Getting Your Work Done: The Queue Policy Working Group
ERDC MSRC Upgrades Facility Infrastructure to Ensure System Availability
“solutionHPC” on the Way
Nurturing the Next Generation
ERDC MSRC Upgrades Network
ERDC MSRC Exhibits High Profile at Users Group Conference 2004
Off Campus
Mississippi Governor Haley Barbour Visits ERDC
Other Visitors
Acronyms
Training

Parallelization of the WASH123D Code Phase II: Coupled Two-Dimensional Overland and Three-Dimensional Subsurface Flows

By Dr. Ruth Cheng and Bobby Hunter

Supported by the Department of Defense (DoD) Common High Performance Computing Software Support Initiative (CHSSI) project, the parallel WASH123D code is designed to solve watershed problems on parallel scalable computers. The watershed model can solve one aspect of battlespace characterization of the environment, which includes space, weather, ocean, and soil. DoD sponsors the development of advanced technologies to build a complete coupled battlespace environment.

WASH123D models a watershed system as a coupled system of a one-dimensional (1-D) channel network, a 2-D overland regime, and 3-D subsurface media. Two environmental issues, water quantity and water quality, are addressed in this model. Two tasks, numerical algorithm development and code parallelization, are included in this project to improve the performance of large problem simulations.

Mathematically, 1-D channel flow and 2-D overland flow are described with the St. Venant equations, which are solved with either the semi-Lagrangian or the Eulerian finite element method. The 3-D subsurface flow is governed by the modified Richards equation, which is solved with the Eulerian finite element method.
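For reference, standard textbook forms of these equations are sketched below; the exact formulation, wave approximation, and notation used in WASH123D may differ. A common diffusion-wave form of the 2-D St. Venant (shallow water) equations and the mixed form of the Richards equation are

\frac{\partial h}{\partial t} + \nabla\cdot\mathbf{q} = S, \qquad \mathbf{q} = -\,\frac{h^{5/3}}{n\sqrt{|\nabla H|}}\,\nabla H \quad \text{(2-D overland flow, Manning roughness } n\text{)},

\frac{\partial \theta(\psi)}{\partial t} = \nabla\cdot\bigl[\,K(\psi)\,\nabla(\psi + z)\,\bigr] + q_s \quad \text{(3-D variably saturated subsurface flow)},

where h is water depth, H is water stage, S and q_s are sources/sinks, \theta is moisture content, \psi is pressure head, K is hydraulic conductivity, and z is elevation.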

Numerically, a parallel in-element particle-tracking algorithm for unsteady flow is employed to backtrack fictitious particles at vertices to determine the so-called Lagrangian values when the semi-Lagrangian method is used. The PT software¹ is interfaced with the parallel WASH123D code as the tool to perform the particle-tracking applications. DBuilder² is the software tool developed to provide a parallel programming environment.

In WASH123D, different numerical approaches are implemented to solve different components of the coupled system. The parallelization of such a complex model starts with the data structure design and then tackles the programming paradigm. The serial version of the WASH123D code is written in FORTRAN 77.

¹ J.-R. C. Cheng and P. Plassmann, “A Parallel Particle Tracking Framework for Applications in Scientific Computing,” Journal of Supercomputing, 28(2):149-164, May 2004.

² R. M. Hunter and J.-R. Cheng, “ERDC MSRC Computational Science and Engineering (CS&E) Group Creates a New Tool for Users,” The Resource, pp. 6-9, Spring 2004.

Since C has better software engineering features than FORTRAN 77, as well as excellent library support, C is the language used to develop the parallel WASH123D. The computational kernel involves no parallelization and is reused in the parallel version without any modification. Therefore, the data structure design becomes very important when the goals of object orientation, parallelization, software integration, and language interoperability need to be reached.

To account for problem domains that may include a 1-D river/stream network, a 2-D overland regime, and 3-D subsurface media, three WashMesh objects are constructed in the object WashDomain, which holds the computational domain as sketched in Figure 1. These three objects describe the three subdomains, on each of which a set of governing equations is derived to mathematically describe the behavior of that component within the entire domain. Note that the WashDomain also includes a coupling object named WashCouple. The WashGlobal object describes the common phenomena, and the WashProcinfo object sets up the parallel environment context. Each subdomain (i.e., WashMesh) is partitioned, based on its preferred partitioning criteria, across processors by DBuilder. Hence, each WashMesh object may include vtxDomain and elementDomain objects, which are created and managed by DBuilder to maintain coherent data structures among processors via ghost vertices/elements on a given mesh. The WashCouple may include the coupler for (1-D, 2-D), (1-D, 3-D), and/or (2-D, 3-D) interactions. The coupler encapsulates all of the implementation of the Message Passing Interface (MPI) scheme for communication/synchronization between different WashMesh objects. This approach can partition each subdomain (i.e., WashMesh) independently. Therefore, this software tool can be reused to integrate two or more applications with different physics on multidomains.

Dr. Ruth Cheng, Computational Engineer, ERDC MSRC; Bobby Hunter, Computational Engineer, ERDC MSRC

Figure 1. Data structure design of the parallel WASH123D
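A minimal C sketch of how such an object hierarchy might be laid out is shown below. The object names (WashDomain, WashMesh, WashCouple, WashGlobal, WashProcinfo, vtxDomain, elementDomain) come from the article and Figure 1; the field names and types are hypothetical, not the actual WASH123D declarations.

#include <mpi.h>   /* only for the MPI_Comm handle used below */

/* Hypothetical sketch of the WASH123D domain hierarchy described in the
 * article and Figure 1.  Object names follow the text; fields and types
 * are illustrative only.                                                */

typedef struct WashProcinfo {      /* parallel environment context            */
    MPI_Comm comm;
    int      rank, nproc;
} WashProcinfo;

typedef struct WashGlobal {        /* common phenomena shared by subdomains   */
    double time, dt;               /* illustrative fields only                */
} WashGlobal;

typedef struct WashMesh {          /* one subdomain: 1-D, 2-D, or 3-D         */
    int   dim;                     /* spatial dimension of this subdomain     */
    void *vtxDomain;               /* vertex partition, created by DBuilder   */
    void *elementDomain;           /* element partition, created by DBuilder  */
} WashMesh;

typedef struct WashCouple {        /* (1-D,2-D), (1-D,3-D), (2-D,3-D) couplers */
    WashMesh *meshA, *meshB;       /* the two subdomains being coupled        */
    /* MPI communication/synchronization between the two meshes is
     * encapsulated here, so each WashMesh can be partitioned independently. */
} WashCouple;

typedef struct WashDomain {        /* the whole computational domain          */
    WashMesh    *channel1d, *overland2d, *subsurface3d;
    WashCouple  *couple[3];
    WashGlobal   global;
    WashProcinfo procinfo;
} WashDomain;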

The nonlinear 2-D overland flow system can be linearized with the Picard method. The linearized equation can then be solved by using particle tracking to compute the total derivative term and by integrating the source/sink terms along the tracking path. Naturally, this application is perfectly suited for parallel implementation because the dependent variable, either water depth or water stage, can be obtained by solving the linearized equation independently. For this purpose, the PT software is equipped with a new pathline computation kernel to accurately track particles under unsteady flow fields.³ The design goal of the PT software development is to interface with different software tools, such as Scalable Unstructured Mesh Algorithms and Applications (SUMAA3d),⁴ and different application codes (e.g., FEMWATER and WASH123D). This goal is achieved through a software architecture specifying a lightweight functional interface, as shown in Figure 2. To efficiently incorporate different mesh (structured or unstructured) programming environments, the PT software uses an abstract particle-mesh interface (PMI), shown in Figure 2, to interact with the parallel mesh programming environment.

³ J.-R. C. Cheng and P. E. Plassmann, “Development of Parallel Particle Tracking Algorithms in Large-scale Unsteady Flows,” SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, CA, Feb. 25-27, 2004.

⁴ Lori Freitag, Mark Jones, Carl Ollivier-Gooch, and Paul Plassmann, “SUMAA3d Web page,” http://www.mcs.anl.gov/sumaa3d, Mathematics and Computer Science Division, Argonne National Laboratory, 1997.
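As a generic sketch of how the Picard linearization and the semi-Lagrangian tracking fit together (the exact WASH123D discretization may differ), the advective part is written as a total derivative along the flow,

\frac{Dh}{Dt} \equiv \frac{\partial h}{\partial t} + \mathbf{V}\cdot\nabla h, \qquad \frac{d\mathbf{x}}{dt} = \mathbf{V},

and at Picard iteration k+1 of time step n+1 a lagged-coefficient update takes the form

\frac{h^{n+1,\,k+1}(\mathbf{x}) - h^{n}(\mathbf{x}^{*})}{\Delta t} = S\!\left(h^{n+1,\,k}\right),

where \mathbf{x}^{*} is the foot of the characteristic found by backtracking a fictitious particle from the vertex at \mathbf{x} through the unsteady velocity field, and h^{n}(\mathbf{x}^{*}) is the Lagrangian value referred to in the text. Each Picard pass then requires only a linear solve, and the iteration stops when successive iterates agree to within a tolerance.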

An application is simulated for the purpose of verification and preliminary performance measurement. The conceptualized area covers about 726 square miles. To compute water flow in a watershed system, the hydro-geologic information, the topographic data, the initial conditions, the boundary conditions, and the sources/sinks must be correctly assigned to the computational mesh so that the computed flow result will reflect real flow in practice. The simulation domain consists of 2,888 vertices and 5,614 elements on the 2-D overland mesh, and 46,208 vertices and 84,210 elements on the 3-D subsurface media (Figure 3). The performance measurement is obtained on the Compaq SC45 at the ERDC MSRC. From Table 1, one can observe that the parallel efficiency is perfect for the four-way simulation and then drops to less than 50 percent for the 16-way simulation. Figure 4 shows that the communication time dominates the total time, even exceeding the projected ideal time, because the problem is too small to run on 16 processors.

Table 1. Performance Measurement

No. of Processors (NP)   Total Communication Time (sec)   Total Wall Clock Time (sec)   Parallel Efficiency
 2                         509                             19,304.1
 4                         917.84                           9,530.66                     1.01
 8                       1,778.18                           7,331.57                     0.66
16                       2,618.07                           5,714.42                     0.42
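The efficiencies in Table 1 appear to be computed relative to the two-processor run; as a quick check of the 16-processor entry,

E(p) = \frac{2\,T_{2}}{p\,T_{p}}, \qquad E(16) = \frac{2 \times 19{,}304.1}{16 \times 5{,}714.42} \approx 0.42,

where T_p is the total wall clock time on p processors.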

Figure 2. The particle-tracking software architecture and its interface to an existing mesh programming environment (e.g., SUMAA3d): the PT software communicates with the mesh programming environment through an abstract particle-mesh interface, using user-supplied routines to create/retrieve/modify vertex, element, and edge attributes and a particle API to create/retrieve/modify particle attributes
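The sketch below only illustrates the idea conveyed by Figure 2; the names and signatures are purely illustrative and are not the actual PT API. The particle tracker sees the mesh programming environment only through a small table of user-supplied callbacks, plus a particle structure of its own.

/* Hypothetical sketch of an abstract particle-mesh interface (PMI).
 * The real PT software defines its own API; names and signatures here
 * are illustrative only.                                               */

typedef struct { double x, y, z; } Point3;

typedef struct Particle {          /* particle API: create/retrieve/modify      */
    Point3 pos;                    /* current position along the pathline       */
    int    elem;                   /* element currently containing the particle */
    double time;                   /* current time along the backtracked path   */
} Particle;

typedef struct PMI {
    void *mesh;                    /* opaque handle to the mesh environment,
                                      e.g., SUMAA3d                             */

    /* user-supplied routines: retrieve/modify vertex, element, and edge
       attributes of the (possibly distributed) mesh                            */
    int (*locate_element)(void *mesh, const Point3 *p, int *elem);
    int (*element_vertices)(void *mesh, int elem, int verts[], int *nverts);
    int (*velocity_at)(void *mesh, int elem, const Point3 *p,
                       double t, double vel[3]);
    int (*element_owner)(void *mesh, int elem);   /* owning rank (ghost data)   */
} PMI;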


To further investigate the detailed wall clock time in communication, Table 2 lists the total number of requests and the actual time spent in communication on different objects with respect to the different physical domains. Clearly, the number of requests stays nearly constant no matter how many processors execute the simulation, but the wall clock time increases with the number of processors. Moreover, the communication time spent in the linear solver exceeds all others, and the time spent in the coupler between the 2-D and 3-D domains can be regarded as minor.

In summary, this phase of the CHSSI project has completed the coupled 2-D overland and 3-D subsurface system. Performance is expected to scale well for large problems. The developed tool (DBuilder) and the upgraded tool (PT) are ready for DoD HPC users to shorten their code parallelization development. In the coming year, tasks involving the 1-D channel routing and canal structure operations are to be completed.

Table 2. Communication Time in Different Domains (NC = number of calls; T = time in seconds)

       3-D vtxDomain        3-D elementDomain   3-D bdyDomain      2-D vtxDomain      2-D elementDomain   Coupler update    Coupler attach
NP     NC         T         NC     T            NC      T          NC         T       NC     T            NC       T        NC       T
 2     2,647,799  488.38    3      0.0008       2,161   0.14       314,472    19.13   2      0.0001       3,245    1.36     1,081    0.04
 4     2,649,200  876.86    3      0.0024       2,161   0.31       314,472    39.74   2      0.001        3,245    0.72     1,081    0.11
 8     2,647,606  1,665.5   3      0.019        2,161   0.77       314,472    110.6   2      0.0056       3,245    1.28     1,081    0.31
16     2,644,916  2,385.2   3      0.0124       2,161   1.59       314,527    229.1   2      0.0014       3,245    2.19     1,081    0.64

Figure 3. 2-D overland and 3-D subsurface mesh for the application

Figure 4. Time measurement on the Compaq SC45: total communication time, total wall clock time, and ideal wall clock time (sec)


Getting Your Work Done: The Queue Policy Working Group

By Bob Alter

A Queue and Purge Policy Working Group has been formed at the ERDC MSRC. The goal of this working group is to provide recommendations on changes that should be made to queue and purge policies to ensure the most efficient allocation of computer resources to user jobs. The members of this working group are the ERDC MSRC staff who work directly with the users, administer the HPC systems, and can provide the expertise to implement these changes on the HPC systems.

Some of the recommended policies that are under consideration or have already been adopted include the following:

> Changes to purge policies that prevent purges of a user’s work directory if the user has a job in the queue and is utilizing less than 300 Gbytes of workspace.

> Requiring users to put a WALLTIME limit in their batch scripts to allow for backfilling of jobs.

> Partitioning of jobs on the Compaq SC40 and SC45 so that individual jobs will typically get unshared nodes if the user requests a CPU count divisible by four.

> Limiting the number of jobs that a single user can have running at one time on a machine, to allow other users a more equal chance of getting their jobs running.

Results so far have shown that large jobs with the highest priority do get scheduled to run at the earliest opportunity; but with the new backfilling policy, the machine is kept as full as possible. Evidence of this can be seen by comparing user jobs on the SGIs in April and August: the total numbers of jobs were almost identical for each month, but the number of large jobs (more than 38 percent of the machine) doubled in August as compared with April under the newly enacted queue policies.

Users who have suggestions and comments for the Queue and Purge Policy Working Group can send them by email to msrchelp@erdc.hpc.mil.

Bob Alter

HPC Applications Specialist ERDC MSRC

ERDC MSRC Upgrades Facility Infrastructure to Ensure System Availability

By Paula Lindsey

As part of ongoing efforts to create an “unbreakable” center, the ERDC MSRC recently upgraded its infrastructure in two critical areas: facility power and cooling. Reliable power is crucial to the continuous availability of high performance computing assets, and to ensure such availability, two new battery banks (comprising 246 dishwasher-size wet cell batteries) were installed in the MSRC powerhouse during the week of September 12. Each battery bank provides 1,000 kilowatts of critical power through a rotary uninterruptible power supply (UPS) to the MSRC computers during the interval between the loss of city power and the completion of diesel-generator startup.

Each new device added to the MSRC computer room floor adds to the heat load inside the facility. To accommodate new equipment additions and to ensure the continued proper cooling of existing systems, three chiller towers that provide process chilled water for cooling in the MSRC were replaced during the week of September 20. The new chillers increase the cooling capacity from 300 tons to 375 tons. A remote monitoring system was also installed as part of the upgrade, enabling operators to be quickly alerted to changes in chilled water system status.

The ERDC MSRC is committed to improving the computational environment and experience of its users, and these modifications, while seemingly mundane when compared to higher profile additions to or expansions of large computer systems, go far to ensure that all MSRC systems remain operational in the face of environmental challenges at the host facility.

Paula Lindsey

Integration Life-Cycle Manager ERDC MSRC


“solutionHPC” on the Way

By Scotty Swillie

A rapidly growing desire exists within the HPC community for a more comprehensive, user-friendly approach to resource access and utilization. New and less experienced users need an entry point that unlocks the power of the HPC world in a way that flattens the learning curve and leads to new discoveries. In response to this need, the ERDC MSRC is launching a new project, “solutionHPC.”

Creating a project circle that users can travel as effortlessly as possible is the overall goal of solutionHPC. The key to making this a reality is abandoning the old stovepipe model and taking a system approach. To do this, barriers between the three major phases of the HPC process - computation, visualization, and storage - are being broken down. This process focuses on ease of access and use as well as on how each phase will integrate into a seamless workflow with the other phases. In response to a question about the scope of this project, John E. West, Director, ERDC MSRC, said, “This is a huge undertaking, but one worth tackling because of the wealth of new science that may be realized.”

The central element that will tie the solutionHPC system together will be an Application Programming Interface (API). The API will reside on a secure server inside the HPC network. It will allow a high level of expandability by serving as a platform that software developers can program against. This interface will offer users of all experience levels a complete toolkit that will enable them to fully manage jobs, schedule/view visualizations, store results, and search/access past results. The solutionHPC interface will feature the look and feel of a Web site, with links and visual representations of job progress and system availability. The look and feel will be intuitive enough that even the novice user can begin making progress quickly.


An unfortunate fact is that the overwhelming majority of HPC jobs are never visualized effectively. solutionHPC hopes to change this by focusing on the “I see” and “We see” modes of exploration. Using either a batch or interactive method, users will be allowed to select an imaging tool, schedule basic visualizations of a project, and have the images returned for viewing directly within the interface. This process will provide users with the opportunity to visually track the progress of a job at a level never before available. “The visual element of solutionHPC will help bring to light discovery in areas that may have been previously hidden within a researcher’s code,” John West added.

Storage of completed jobs is the final major component of the solutionHPC system.


Scotty Swillie

Special Projects Lead ERDC MSRC

Because of the difficulty users have in retrieving information about what is in stored files, users are missing hidden nuggets of information in files that may be 5 or 10 years old and that could be useful when planning new work. solutionHPC will offer a storage component that is powerful and user-friendly. File transfer/retrieval will be greatly simplified, and a database will be used to store metadata about each job. This will allow the interface to display a listing of relevant jobs by user, title, software used, or one of many other descriptors. This smart storage component of solutionHPC will complete the project circle.

Once the individual components are complete, they will be blended into the comprehensive solutionHPC system. The finished system will offer powerful research tools with one-stop access to the world of high performance computing.

The resulting ease of use and faster turnaround will allow higher levels of research to be done in shorter time frames. This swings open wide the door of discovery and allows the users of solutionHPC to realize the full potential of their research.


Nurturing the Next Generation

Attracting the best and brightest students to science and engineering remains one of the top priorities for the ERDC MSRC in its effort to help ensure technological leadership for the next generation. During the past 13 months, it has devoted V,Q90 total contact hours toward this effort, with 1,199 of those hours concentrating on minority students. During the summer of 2004, the ERDC MSRC participated in three major outreach efforts that will help shape the future of the science and engineering work force.

By Reginald Liddell

The ERDC MSRC and the High Performance Computing Modernization Program Programming Environment and Training (PET) program, in partnership with Jackson State University (JSU), held the 6th HPC Summer Institute for graduate and undergraduate students from minority-serving institutions nationwide. On Monday, June 14, the Institute conducted orientation and opening sessions for 14 students representing six universities for 2 weeks of HPC learning activities.

John E. West, ERDC MSRC Director, gave a presentation to Institute participants and faculty entitled “User Perspectives on HPC and How to Prepare for Careers in Supercomputing.” The lecture focused on the many uses of supercomputing in the scientific and engineering communities as well as on the open research issues in HPC and possible career paths for young researchers. Dr. Joe Thompson, Mississippi State University (MSU), and Paul Adams, ERDC MSRC Scientific Visualization Center Lead, hosted lectures on “Supercomputing in Mississippi” and “Scientific Visualization,” respectively.

The Institute coordinated a tour of the ERDC MSRC research site hosted by Dr. Wayne Mastin and Reggie Liddell, both of the PET program. Dr. Deborah Dent, ERDC Information Technology Laboratory (ITL) Deputy Director, spoke with the students during lunch. She expressed the importance of academic achievement, work ethic, and the ongoing demand for top quality IT professionals in the HPC industry. Week One Institute activities concluded with a tour of the Engineering Research Center at MSU with Dr. Robert Moorehead serving as point of contact.

Program activities designed to introduce participants to the world of HPC included (1) Building Clusters with Linux Servers, (2) Applications of Scientific Visualization, (3) Introductions to Virtual Reality, (4) High Fidelity Simulation, (5) Foundations and Applications of Computational Fluid Dynamics, and (6) Applications of Enabling Technologies. Students were paired into six groups, each assigned to a mentor for project and presentation support. HPC practitioners from ERDC, JSU, MSU, and the University of Alabama, Birmingham (UAB), served as mentors for the students, helping them connect course material to real-time, hands-on examples of typical HPC ventures.

Week Two learning activities and closing ceremonies concluded at UAB on Friday, June 25. Many of the students showed keen interest in the possibilities and the vast array of HPC opportunities. Edward Tate, a senior computer science major at Southern University, remarked on how overwhelmed and impressed he was to see the level of technology housed at ERDC. Senior computer science majors from Hampton University, Ashley Brooks and Roshawnda Walker, explained how their classmate Brittany Owens raved about her 2003 summer intern experience, which inspired them to apply for the 2004 HPC Summer Institute.



The Institute is part of a longstanding series of Institutes held by the MSRC, PET, and JSU to expand awareness of, and interest in, science, engineering, and supercomputing among traditionally underrepresented groups. The engineering and computer science majors selected for the 2004 Institute included six students from JSU, Jackson, MS; two students from Hampton University, Hampton, VA; one student from Mississippi Valley State University, Itta Bena, MS; one student from Alabama A&M University, Huntsville, AL; three students from Southern University-Baton Rouge, Baton Rouge, LA; and one student from Rust College, Holly Springs, MS.

Those interested in applying for the program should contact Dr. Loretta Moore by e-mail at loretta.a.moore@jsums.edu or telephone at (601) 979-2105.

JSU Institute participants are shown with ERDC MSRC personnel PET Component 3 Program Manager Dr. Wayne Mastin (far left), ERDC MSRC Director John E. West (second from left), and PET Component 3 Technical Advisor Bob Athow (far right)

By Rose J. Dykes

Thirty-nine high school juniors and seniors from Louisiana and Mississippi attended the second annual S.A.M.E. (Society of American Military Engineers) Engineering and Construction Camp held at ERDC, Vicksburg, Mississippi, during the week of June 6-12. Dr. Michael Stephens, of the ERDC MSRC Scientific Visualization Center (SVC) and the Army High Performance Computing Research Center, served as one of the instructors when the students visited the Information Technology Laboratory.

Dr. Stephens demonstrated an interactive computational fluid dynamics (CFD) application on the SVC’s Immersadesk. He then allowed the students to drive the application, moving around in the fluid flow and releasing virtual smoke streams to investigate the flow fields.

Dr. Michael Stephens, ERDC MSRC, demonstrates an interactive computational fluid dynamics virtual reality application on the Immersadesk

The purpose of this camp is to give high school students hands-on experience in engineering and construction skills in Vicksburg’s wide-ranging engineering community as the students perform various curriculum activities at each of the ERDC-Vicksburg laboratories. The Vicksburg and Louisiana Posts of S.A.M.E. host the camp. Professional engineers and volunteers from local engineering organizations supervise the program.


By Dr. Wayne Mastin

The PET Summer Intern program completed another successful 10-week term on August 6, 2004. The group of students this summer consisted of a record number of six interns, including students from Pennsylvania, Arizona, and Hawaii.

The first few years of the PET Summer Intern program operated on a modest scale. From 1997 through 2000, Clark Atlanta University sent one student to ERDC each summer. If the intern program is judged only on the basis of contributing to the DoD work force, these first few years were a success. The last intern provided through Clark Atlanta, Richard (Rikk) Anderson, subsequently was hired at the ERDC MSRC and developed the Big Brother HPC machine monitoring system.

Beginning in the summer of 2001, students were recruited from all colleges and universities and funded directly by PET. The program is now administered by the University of Hawaii, under contract with Mississippi State University. Students are recruited from across the Nation for all four of the High Performance Computing Modernization Program (HPCMP) MSRCs. Although no preference is given to local institutions, generally about half of the interns come from Mississippi and adjoining states. Many of the recent interns are still pursuing their education. One intern from the summer of 2003, Leelinda Parker, graduated from Jackson State University and accepted a position with the Army Research Laboratory.

This summer’s interns addressed a wide variety of challenging problems in collaboration with their mentors in the ERDC MSRC. Jennifer Williams, an applied mathematics major at the University of Alabama in Birmingham, worked on the analysis and visualization of HPC performance data. This work will assist her mentor, Dr. Bill Ward, and others in the Computational Science and Engineering group in the interpretation of performance characteristics of ERDC MSRC hardware and software resources.

Omar Rodriguez, a computer science major at Arizona State University, returned for his second summer as an intern at ERDC. He continued working with his mentor, Dr. Wayne Mastin, on efforts supporting the PET Online Knowledge Center by developing and implementing a structure for the PET program’s online training resources.

Jian-Zhong (Dan) Li, a computer science major from Bucknell University, developed scripts for his mentor, Jay Cliburn, to improve the system monitoring capabilities for the MSRC’s HPC machines.

Si Loi Leung, mechanical engineering major at the University of Hawaii, worked with his mentor, Nathan Prewitt, on the investigation of techniques used in automatic grid generation for complex geometric shapes.

Corey Bordelon, a computer science major at the University of Louisiana, Monroe, developed a Web-based task manager system for the Desktop Support team. Corey’s work was done in collaboration with his mentor, Morris Ramsey, along with Rikk Anderson and Karen Mauldin.

The final intern for the summer of 2004 was Kylie Nash, a computer science major at Jackson State University. She worked with her mentor, Paul Adams, on several visualization projects during the 4 weeks she spent at the ERDC MSRC.

The success of the PET Summer Intern program is due not only to the interns and mentors but also to the efforts of the PET staff: of significant note are Anne Mullins, who provided administrative and logistics support, and Reggie Liddell, who organized the intern lecture series that enriched the educational experience of the interns. The students indicated their internship was a rewarding experience and expressed an interest in returning to Vicksburg next summer.

Students interested in applying for next year’s program should contact Dr. Susan Brown by e-mail at stbrown@hawaii.edu or by telephone at (808) 956-2808.

PET interns shown (left to right) are Corey Bordelon, Si Loi Leung, Jennifer Williams, Omar Rodriguez, and Jian-Zhong (Dan) Li


ERDC MSRC Upgrades Network

By Jay Cliburn


Over a 4-day period in July, the ERDC MSRC replaced its entire internal network infrastructure in order to provide redundancy, higher fault tolerance, and improved performance to local and remote users and staff. A central architectural feature of the new configuration is dual, redundant, highly available pathways from the Defense Research and Engineering Network (DREN) to HPC and mass storage systems. Specific network components of interest include the following.

> Dual Netscreen 5200 firewall devices to detect and deter unauthorized intrusion and to provide detailed network traffic logging information to assist in post-intrusion forensics. These devices also provide border router functionality for the MSRC.

> Dual high-speed Foundry BigIron 4000 core switches to provide 10 gigabit connectivity to the DREN and pass 1 gigabit traffic into the MSRC.

> Dual high-speed Foundry BigIron 15000 distribution switches to distribute 1 gigabit traffic throughout the MSRC.

Jay Cliburn

Technical Operations Manager ERDC MSRC

> Dual 1 gigabit network interfaces for the mass storage system and most HPC system user logins, and dual gigabit data pipes between HPC systems and the mass storage archive for redundant data transfers to and from the mass storage system.

> Addition of 100/1,000 megabit per second (Mbps) capability to staff desktops and other edge devices.

The new network will be IPv6 compatible and also supports 9,000-byte Ethernet network packets, called “jumbo frames.” Typical Ethernet packets are 1,500 bytes in size; jumbo frames provide for more efficient data transfer, resulting in increased throughput, especially for the large files that are common in scientific computing. The new network is also 10-gigabit ready throughout, enabling the next uptick in network speed without the need for chassis replacement of network gear and the associated lengthy downtime and user disruption accompanying that activity. (Translated, a “forklift upgrade” won’t be required to go to 10 GigE.)
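As a rough illustration of the per-packet savings (assuming full-size payloads), transferring a 1-gigabyte file requires about

\frac{10^{9}\ \text{bytes}}{1{,}500\ \text{bytes/frame}} \approx 6.7\times 10^{5}\ \text{frames} \qquad \text{versus} \qquad \frac{10^{9}\ \text{bytes}}{9{,}000\ \text{bytes/frame}} \approx 1.1\times 10^{5}\ \text{frames},

so per-frame protocol and interrupt overhead is incurred roughly six times less often.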

Of particular interest to users (and their respective local network administration staffs) is the addition of the Netscreen firewall devices. Prior to this upgrade, the ERDC MSRC was not behind a firewall; now it is. Occasionally, a network configuration change at the user end may result in blocked IP traffic at the MSRC firewall. If you suspect this to be the case, please contact the Customer Assistance Center for help.

The ERDC MSRC is pleased to provide this improved network infrastructure to its users. The network is faster, more resilient, and more secure than ever before, and it is extensible to the next generation of LAN bandwidth, positioning the Center well to serve the HPCMP community for the next several years.

The ERDC MSRC welcomes comments and suggestions regarding the Resource and invites article submissions. Please send submissions to the following e-mail address:

msrchelp@erdc.hpc.mil


ERDC MSRC Exhibits High Profile at Users Group Conference 2004

By Rose J. Dykes

Personnel from the HPCMP computing centers and the users of these resources gathered in Williamsburg, Virginia, June 7-11, for the Users Group Conference 2004, where a forum was provided for communication, training, and discussion of HPC and its impact on science and technology. ERDC MSRC team members participated by conducting tutorials and presenting technical papers and other talks, thereby helping to ensure that the DoD community has the tools and expertise to make the best use of state-of-the-art systems.

On the first day of the Conference, Paul Adams, Tom Biddlecome, and Richard Walters, all of the ERDC MSRC Scientific Visualization Center, conducted a full-day Bring Your Own Data Workshop. Dr. William Ward and other members of the ERDC MSRC Computational Science and Engineering group, along with Cray, Inc., staff, conducted a full-day tutorial entitled “Cray X1 Code Migration and Optimization.”

Dr. Fred Tracy presented two technical papers at the Conference, one entitled “Optimizing Finite Element Programs on the Cray X1 Using Coloring Schemes” and the other “A Direct Out-of-Core Solver for CGWAVE on the Cray X1,” stressing that sizeable speedup can be achieved on the ERDC Cray X1 by modifying key algorithms and using specialized compiler directives.

Other presentations by ERDC MSRC team members were as follows:

“DBuilder: A Parallel Data Management Toolkit for Scientific Applications” by Bobby Hunter; “Performance of a Synthetic Application Benchmarking on Several High Performance Computing Platforms” by Dr. Paul Bennett; “Experiences Profiling and Characterizing DoD HPC Applications” by Dr. William Ward; “A Configuration Exploration” by Carrie Leach; “OKC: A Quick Look at the PET Online Knowledge Center” by Lesa Nelson; “RSS: Using Really Simple Syndication to Keep Users Informed” by Glen Browning; and “Memory Debugging of Parallel Programs: Issues, Tools, and Experiences” by Dr. Jeff Hensley.

A highlight at the Conference for the ERDC MSRC was having its Scientific Visualization Lead, Paul Adams, receive the HPCMP “Technical Excellence” award. This is given to the nominee who best demonstrates scientific or engineering excellence using HPCMP resources in a creative and effective manner. Paul, along with his team at ERDC, accomplished this by integrating scientific data with animation, as shown in the graphic below, in which an isosurface of water values is rendered as realistically as possible.

Large-eddy simulation of steep breaking waves and thin spray sheets around a ship


Dr. Fred Tracy presents one of his two paper presentations at the Conference

Dr. Bob Maier (center) and Paul Adams (far right) talk about work at the ERDC MSRC with a visitor at the Conference Poster Session

Glen Browning makes his RSS NewsFeed presentation to a standing-room only crowd at the Conference


Cray User Group Conference

Drs. Sam Cable and Thomas Oppe of the Computational Science and Engineering (CS&E) group at the ERDC MSRC presented a paper entitled “Optimization of LESlie3D for the Cray X1 Architecture” at the Cray User Group conference held in Knoxville, TN, on May 17-21. LESlie3D is a Computational Fluid Dynamics (CFD) code developed at Georgia Tech that is used for solving turbulent reacting flows such as those occurring in combustion chambers. A version of this code for structured meshes was optimized for the X1 architecture by the insertion of compiler directives to increase vectorization and multistreaming efficiency. The optimized code ran approximately 20 percent faster than the original code and performed more vector operations. In addition, a comparison of the original and optimized versions of LESlie3D was made using hardware counter data obtained from the ERDC X1 and from an IBM p690 computer at the Naval Oceanographic Office (NAVO) MSRC. For a 16-processor run of LESlie3D using a grid of 192³, the optimized code ran over 12 times faster on the X1 than on the IBM at the MSP level and three times faster at the SSP level. The code also exhibited better cache behavior on the X1 than on the IBM.

Dr. Sam Cable

Computational Engineer CS&E, ERDC MSRC

Dr. Thomas Oppe

Computational Engineer CS&E, ERDC MSRC

University of Southern Mississippi

Dr. Bill Ward, Lead for the ERDC MSRC CS&E group, visited the University of Southern Mississippi and spoke to students on the subject of research ethics. The audience included students from “Introduction to Computing” and “Computer Ethics” classes in the Computer Science (CS) Department and from a “Computer Ethics” class in the Criminal Justice Department. Dr. Ward began with some general remarks on morality, ethics, and integrity and then discussed several case studies related to ethics in research. Ms. Nancy Howell, the instructor for the CS ethics class, described the presentation as “on target” and said she hoped to repeat the presentation during the fall semester.

Dr. Bill Ward, CS&E Group Lead, ERDC MSRC


Mississippi Governor Haley Barbour Visits ERDC

Governor Haley Barbour visited ERDC on August 6, 2004. The Governor, who took office earlier this year, is visiting government agencies and industry around the state of Mississippi to familiarize himself with their missions and capabilities. A briefing on the organization’s mission to support the warfighter and the civil works activities of the Corps of Engineers was presented to him before he toured each of ERDC’s four Vicksburg laboratories.

At the ERDC MSRC, Governor Barbour toured the HPC center viewing the computing resources and learning about its computing capability. The MSRC’s state-of-the-art techniques for data interpretation were also discussed. In addition, the Governor was made aware that the MSRC realizes the importance of attracting the best and brightest students to science and engineering and, therefore, continually participates in programs that promote this.

Governor Barbour arrives at the ERDC Information Technology Laboratory

Dr. Jeffery Holland (far right), ERDC Information Technology Laboratory Director, talks with Mississippi Governor Haley Barbour and others during the Governor’s visit to the MSRC, August 6, 2004

Information Technology Laboratory Director Dr. Jeffery Holland discusses a scientific visualization project with Governor Barbour


Other Visitors

(Left to right) David Stinson, ERDC MSRC Assistant Director, with COL Manuel Fuentes, Inspector General, Joint Force Headquarters, Mississippi Army National Guard, and Miguel Gonzalez, U.S. Army Engineer District, Vicksburg, Vidalia, LA, Area Office, September 15, 2004

(Left to right) COL Todd Semonite, Office, Chief of Engineers, The Pentagon, Dr. Bob Maier, ERDC MSRC Assistant Director, and COL Jim Rowan, ERDC Commander, June 4, 2004

Greg Rottman (far left), ERDC MSRC Assistant Director, with U.S. Army Engineer Division, South Atlantic, Emerging Leaders, May 12, 2004


Below is a list of acronyms commonly used among the DoD HPC community. You will find these acro¬ nyms throughout the articles in this newsletter.

API - Application Programming Interface
CFD - Computational Fluid Dynamics
CPU - Central Processing Unit
CS - Computer Science
CS&E - Computational Science and Engineering
DoD - Department of Defense
DREN - Defense Research and Engineering Network
ERDC - Engineer Research and Development Center
HPC - High Performance Computing
HPCMP - HPC Modernization Program
ITL - Information Technology Laboratory
JSU - Jackson State University
MPI - Message Passing Interface
MSP - Multistreaming Processor
MSRC - Major Shared Resource Center
MSU - Mississippi State University
NAVO - Naval Oceanographic Office
PET - Programming Environment and Training
PMI - Particle-Mesh Interface
PT - Particle Tracking
SHAMRC - Second-Order Hydrodynamic Automatic Mesh Refinement Code
SSP - Single-Streaming Processor
SVC - Scientific Visualization Center
UAB - University of Alabama, Birmingham

For the latest on PET training and on-line registration, please go to the On-line Knowledge Center Web site:

https://okc.erdc.hpc.mil

Questions and comments may be directed to PET training at (601) 634-3131, (601) 634-4024, or PET-Training@erdc.usace.army.mil


ERDC MSRC Newsletter Editorial Staff

Interdisciplinary Coordinator

Mary L. “Dean” Hampton

Chief Editor

Rose J. Dykes

Visual Information Specialist

Betty Watson

ERDC MSRC Customer Assistance Center

Web site: www.erdc.hpc.mil E-mail: msrchelp@erdc.hpc.mil Telephone: 1-800-500-4722


The contents of this newsletter are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official endorsement or approval of the use of such commercial products.

Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the DoD. Your comments, ideas, and contributions are welcome. Design and layout provided by the Visual Production Center, Information Technology Laboratory,

U.S. Army Engineer Research and Development Center.

Approved for public release; distribution is unlimited.

solutionHPC

Making High Performance Computing easy for even the novice user.

Easy as 1 2 3... Compute, Visualize, and Store data utilizing a dynamic, interactive graphic user interface - making High Performance Computing easy as 1 2 3.

For more, see page 6


U.S. ARMY ENGINEER RESEARCH AND DEVELOPMENT CENTER INFORMATION TECHNOLOGY LABORATORY